Web Survey Bibliography
In the last two decades, Web or Internet surveys have had a profound impact on the survey world. The change has been felt most strongly in the market research sector, with many companies switching from telephone surveys or other modes of data collection to online surveys. The academic and public policy/social attitude sectors were a little slower to adopt, being more careful about evaluating the effect of the change on key surveys and trends, and conducting research on how best to design and implement Web surveys. The public sector (i.e., government statistical offices) has been the slowest to embrace Web surveys, in part because the stakes are much higher, both in terms of the precision requirements of the estimates and in terms of the public scrutiny of such data. However, National Statistical Offices (NSOs) are heavily engaged in research and development with regard to Web surveys, most notably as part of a mixed-mode data collection strategy, or in the establishment survey world, where repeated measurement and quick turnaround are the norm.

Along with the uneven progress in the adoption of Web surveys have come a number of concerns about the method, particularly with regard to the representational or inferential aspects of Web surveys. At the same time, a great deal of research has been conducted on the measurement side of Web surveys, developing ways to improve the quality of data collected using this medium. This seminar focuses on these two key elements of Web surveys: inferential issues and measurement issues. Each of these broad areas is covered in turn in the following sections. The inferential section is largely concerned with methods of sampling for Web surveys and the associated coverage and nonresponse issues. Different ways in which samples are drawn, using both non-probability and probability-based approaches, are discussed.
The assumptions behind the different approaches to inference in Web surveys, the benefits and risks inherent in each, and the appropriate use of particular approaches to sample selection are reviewed. The following section then addresses a variety of issues related to the design of Web survey instruments, with a review of the empirical literature and practical recommendations for design to minimize measurement error.
A total survey error framework (see Deming, 1944; Kish, 1965; Groves, 1989) is useful for evaluating the quality or value of a method of data collection such as Web or Internet surveys. In this framework, there are several different sources of error in surveys, which can be divided into two main groups: errors of non-observation and errors of observation. Errors of non-observation refer to failures to observe or measure eligible members of the population of interest, and include coverage errors, sampling errors, and nonresponse errors; they are primarily concerned with issues of selection bias. Errors of observation are also called measurement errors (see Biemer et al., 1991; Lessler and Kalsbeek, 1992). Sources of measurement error include the respondent, the instrument, the mode of data collection, and (in interviewer-administered surveys) the interviewer. In addition, processing errors can affect all types of surveys. Errors can also be classified according to whether they affect the variance or the bias of survey estimates, both contributing to the overall mean square error (MSE) of a survey statistic. A total survey error perspective aims to minimize mean square error for a set of survey statistics, given a set of resources. Thus, cost and time are also important elements in evaluating the quality of a survey. While Web surveys are generally far less expensive than other modes of data collection, and are quicker to conduct, serious concerns have been raised about errors of non-observation or selection bias. On the other hand, there is growing evidence that using Web surveys can improve the quality of the data collected (i.e., reduce measurement errors) relative to other modes, depending on how the instruments are designed. Given this framework, we first discuss errors of non-observation or selection bias that may raise concerns about the inferential value of Web surveys, particularly those targeted at the general population.
Then in the second part we discuss ways that the design of the Web survey instrument can affect measurement errors.
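The MSE decomposition mentioned above can be sketched numerically. The following Python snippet is illustrative only and not from the source: it simulates a hypothetical population in which part of the population is missed by the sampling frame (a coverage error), repeatedly draws samples from the covered portion, and shows that the mean square error of the sample mean equals its variance plus its squared bias. All population sizes, means, and variable names are assumptions chosen for the example.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 70% covered by the frame (mean 50),
# 30% offline and unreachable by a Web survey (mean 60).
covered = [random.gauss(50, 10) for _ in range(7000)]
uncovered = [random.gauss(60, 10) for _ in range(3000)]
population = covered + uncovered
true_mean = statistics.mean(population)

# Repeatedly draw samples from the covered portion only,
# mimicking a Web survey that misses the offline population.
estimates = [statistics.mean(random.sample(covered, 200)) for _ in range(2000)]

bias = statistics.mean(estimates) - true_mean  # systematic error from undercoverage
variance = statistics.pvariance(estimates)     # sampling variability of the estimator
mse = statistics.mean((e - true_mean) ** 2 for e in estimates)

# Total survey error decomposition: MSE = variance + bias^2
print(round(mse, 2), round(variance + bias ** 2, 2))
```

In this sketch the bias term dominates: no matter how large each sample is, estimates stay centered below the true mean because the higher-scoring offline group is never observed, which is why errors of non-observation are treated as a selection-bias problem rather than a sample-size problem.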
Web survey bibliography - 2011 (358)
- The Validity of Surveys: Online and Offline; 2016; Wiersma, W.
- Computer science security research and human subjects: Emerging considerations for research ethics boards...; 2013; Buchanan, E. A., Aycock, J., Dexter, S., Dittrich, D., Hvizdak, E. E.
- Multiple Sources of Nonobservation Error in Telephone Surveys: Coverage and Nonresponse; 2011; Peytchev, A.; Carley-Baxter, L. R.; Black, M. C.
- Online Questionnaires for Outbreak Investigations; 2011; Parry, A. E.; Johnson, D. R.; Byron-Gray, K.; Raupach, J. C. A.; McPherson, M.
- Inventory of published research: Response burden measurement and reduction in official business statistics...; 2011; Giesen, D. & Snijkers, G. (Eds.), Bavdaz, M., Bergstrom, Y., Gravem, D. F., Haraldsen, G., Hedlin, D...
- Effects of speeding on satisficing in Mixed-Mode Surveys; 2011; Bathelt, S., Bauknecht, J.
- Using Research-Based Practices to Increase Response Rates of Web-Based Surveys; 2011; Perkins, R. A.
- Using break-offs in web interviews for predicting web response in mixed mode surveys; 2011; Beukenhorst, D.
- Web panels in Slovenia; 2011; Lenar, J.
- Traditional and non-traditional treatments for autism spectrum disorder with seizures: an on-line survey...; 2011; Frye, R. E., Sreenivasula, S., Adams, J. B.
- Understanding the new digital divide—A typology of Internet users in Europe; 2011; Brandtzæg, P.B.; Heim, J.; Karahasanović, A.
- Patients’ attitudes toward side effects of antidepressants: an Internet survey; 2011; Kikuchi, T., Uchida, H., Suzuki, T., Watanabe, K., Kashima, H.
- Web-based or paper-based surveys: a quandary?; 2011; Bennett, L., Sid Nair, C.
- Refining the Total Survey Error Perspective; 2011; Smith, T. W.
- ELIPSS: Étude Longitudinale par Internet Pour les Sciences Sociales; 2011; Legleye, S., Lesnard, L.
- Less questions, more data: Revitalizing the European currency in single source affluent audience measurement...; 2011; Hartman, H.
- Linking website exposure data to survey data: A single-source solution; 2011; Krahn, J., Landi, J., Melton, E.
- Inference in surveys with sequential mixed-mode data collection; 2011; Buelens, B., van der Brakel, J.
- Using a Probability-based Online Panel to Survey American Jews; 2011; Wright, G., Phillips, B. T., Tobias, J., Peugh, J., Semans, K.
- Choice of Content Presentation Mode in Web-Based Survey Administration; 2011; Osborn, L., Mansfield, W., Ramirez, C. M., Lacey, J. N., etc.
- Seasonal Yield Variation and Related Response Patterns in Address-based Mail Samples; 2011; DiSogra, C., Hendarwan, E.
- Gender-specific on-line shopping preferences; 2011; Ulbrich, F., Christensen, T., Stankus, L.
- Mixing modes in the LFS - Computer-assisted, cost effective and respondent friendly; 2011; Koerner, T., van der Valk, J.
- Peanuts and Monkeys: Incentivisation and engagement in online access panels; 2011; Marks, B.
- Establishing Cross-National Equivalence of Measures of Xenophobia: Evidence from Probing in Web Surveys...; 2011; Braun, M., Behr, D., Kaczmirek, L.
- Methodological challenges in the use of the Internet for scientific research: Ten solutions and recommendations...; 2011; Reips, U.-D., Buchanan, T., Krantz, J. H., McGraw, K.
- Search and email still top the list of most popular online activities; 2011; Purcell, K.
- Using Internet in Stated Preference Surveys: A Review and Comparison of Survey Modes; 2011; Lindhjem, H., Navrud, S.
- On the experience and evidence about mixing modes of data collection in large-scale surveys where the...; 2011; Dex, S., Gumy, J.
- Survey Gamification: Old Wine in New Bottles?; 2011; Baker, R. P.
- The Game Experiments: Researching how gaming techniques can be used to improve the quality of feedback...; 2011; Sleep, D., Puleston, J.
- Statistical Estimation of Word Acquisition With Application to Readability Prediction; 2011; Kidwell, P., Lebanon, G., Collins-Thompson, K.
- What is Probit; 2011
- Voice-of-the-customer marketing: A revolutionary 5-step process to create customers who care, spend,...; 2011; Roman, E.
- User agent; 2011
- Unpublished internal Google report on break off rates by device type; 2011; Callegaro, M.
- Toward wiser public judgment; 2011; Yankelovich, D., Friedman, W.
- The impact of cookie deletion on site-server and ad-server metrics in Australia. An empirical comScore...; 2011
- The changing role of address-based sampling in survey research; 2011; Iannacchione, V. G.
- State of mobile measurement; 2011; Gluck, M.
- Some issues in the application of latent class models for questionnaire design; 2011; Biemer, P. P., Berzofsky, M.
- Self-administered mobile surveys; 2011; Bosnjak, M.
- SDSC Announces scalable, high-performance data storage cloud; 2011
- Ratings and audience measurement; 2011; Napoli, P. M.
- Randomized response models in survey sampling. Randomized response models; 2011; Hussain, Z.
- Online survey research: Findings, best practices, and future research. Report prepared for the Advertising...; 2011; Vannette, D.
- New Esomar survey on use of cookies and tracking technologies; 2011
- Mobile, webmail, desktops: Where are we viewing email now?; 2011
- Measuring Americans' issue priorities. A new version of the most important problem question reveals...; 2011; Yeager, D. S., Larson, S. B., Krosnick, J. A., Tompson, T.